
    CumuloNimbo: Una plataforma como servicio con procesamiento transaccional altamente escalable.

    The cloud computing model has gained great popularity in recent years, as evidenced by the number of products that different companies have launched to offer software, processing capacity, and services in the cloud. For a company, moving its applications to the cloud in order to guarantee their availability and scalability and to reduce costs is not an easy task. The main problem is that applications have to be redesigned, because cloud computing platforms impose restrictions that traditional environments do not. In this paper we present CumuloNimbo, a cloud computing platform that supports the transparent execution and migration of multi-tier applications in the cloud. One of the main features of CumuloNimbo is highly scalable and consistent transaction management. The paper describes the architecture of the system, as well as an evaluation of its scalability.

    Fault-Tolerant Business Processes

    Abstract. The service-oriented computing (SOC) paradigm promotes the idea of assembling application components into a network of loosely coupled services. Web services are the most promising SOC-based technology. A BPEL process definition represents a composite service that encapsulates some complex business logic, including invocations of other (external) web services. The complexity of a BPEL process, together with the invocation of external services subject to network and computer failures, requires countermeasures to tolerate these kinds of failures. In this paper we present an overview of FT-BPEL, a fault-tolerant implementation of BPEL that copes with both failures of the machine running the BPEL process and network failures in a transparent way; that is, after a failure the system is able to resume the BPEL process consistently.
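
    The abstract does not spell out FT-BPEL's mechanism; the following minimal Python sketch only illustrates the general checkpoint-and-resume idea behind fault-tolerant process engines of this kind. The file name, state layout, and step model are invented for illustration and are not FT-BPEL's actual design.

    ```python
    # Minimal sketch: persist process state after each external invocation so
    # a restarted engine resumes from the last completed step (hypothetical
    # storage and step model, not FT-BPEL's actual design).
    import json, os

    STATE_FILE = "process_state.json"   # hypothetical durable store

    def load_state():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)
        return {"next_step": 0, "results": []}

    def checkpoint(state):
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)          # persist progress before moving on

    def run_process(steps):
        """steps: list of callables standing in for external service calls."""
        state = load_state()
        for i in range(state["next_step"], len(steps)):
            result = steps[i]()          # may fail; a rerun resumes from here
            state["results"].append(result)
            state["next_step"] = i + 1
            checkpoint(state)            # commit progress after each invocation
        return state["results"]
    ```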

    CumuloNimbo: A highly-scalable transaction processing platform as a service.

    One of the main challenges facing next-generation cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.

    A multi-resource load balancing algorithm for cloud cache systems

    With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded because some of the objects they store are frequently accessed, while other cache instances are used far less. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing stored data. Since balancing multiple resources at the same time may create conflicting goals, we give the CPU and memory resources weighted priorities based on the runtime load distributions: a scarcer resource is given a higher weight than a less scarce resource when load balancing. The system imbalance degree is evaluated from monitoring information and from the utility load of a node, a unit for resource consumption. Moreover, since continuously rebalancing the system may affect the QoS of applications using the cache, our data selection policy ensures that each data migration minimizes the system imbalance degree and hence that the total reconfiguration cost is minimized. An extensive simulation compares our policy with other policies; our policy shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
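
    As a rough illustration of the scarcity-weighted balancing described above (not the paper's actual formulas; the names, the dispersion-based imbalance measure, and the greedy single-move selection are all assumptions), a Python sketch:

    ```python
    # Minimal sketch: scarcity-weighted CPU/memory balancing for a distributed
    # cache. Node, weights, imbalance and best_migration are illustrative names.
    from dataclasses import dataclass
    from statistics import mean, pstdev

    @dataclass
    class Node:
        cpu: float   # CPU utilization in [0, 1]
        mem: float   # memory utilization in [0, 1]

    def weights(nodes):
        """Give the scarcer (more heavily used) resource the higher weight."""
        cpu_use = mean(n.cpu for n in nodes)
        mem_use = mean(n.mem for n in nodes)
        total = (cpu_use + mem_use) or 1.0
        return cpu_use / total, mem_use / total

    def imbalance(nodes):
        """Weighted imbalance degree: dispersion of per-resource load."""
        w_cpu, w_mem = weights(nodes)
        return (w_cpu * pstdev([n.cpu for n in nodes])
                + w_mem * pstdev([n.mem for n in nodes]))

    def best_migration(nodes, objects):
        """objects: list of (home_node, cpu_cost, mem_cost) per cached object.
        Greedily pick the single move that most reduces the imbalance degree."""
        best_score, best_move = imbalance(nodes), None
        for obj_id, (src, c, m) in enumerate(objects):
            for dst in range(len(nodes)):
                if dst == src:
                    continue
                # tentatively move the object, score the result, then undo
                nodes[src].cpu -= c; nodes[src].mem -= m
                nodes[dst].cpu += c; nodes[dst].mem += m
                score = imbalance(nodes)
                nodes[dst].cpu -= c; nodes[dst].mem -= m
                nodes[src].cpu += c; nodes[src].mem += m
                if score < best_score:
                    best_score, best_move = score, (obj_id, src, dst)
        return best_move  # None means no single move improves the balance
    ```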

    Ajitts: adaptive just-in-time transaction scheduling

    Lecture Notes in Computer Science 7891, 2013. Distributed transaction processing has benefited greatly from optimistic concurrency control protocols, which avoid costly fine-grained synchronization. However, the performance of these protocols degrades significantly when the workload increases, namely by leading to a substantial number of aborted transactions due to concurrency conflicts. Our approach stems from the observation that the abort rate increases with the load because already executed transactions queue for longer periods of time while waiting for their turn to be certified and committed. We thus propose an adaptive algorithm for judiciously scheduling transactions to minimize the time during which they are vulnerable to being aborted by concurrent transactions, thereby reducing the overall abort rate. We do so by throttling transaction execution with an adaptive mechanism based on the locally known state of globally executing transactions, which includes out-of-order execution. Our evaluation using traces from the industry-standard TPC-E workload shows that the number of aborted transactions can be kept bounded as system load increases, while at the same time fully utilizing system resources and thus scaling transaction processing throughput.
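
    A minimal sketch of the just-in-time throttling idea follows; the admission rule, target backlog, and moving-average estimate are assumptions for illustration, not the AJITTS algorithm itself.

    ```python
    # Minimal sketch: hold a new transaction back when the certification queue
    # is long, so it spends less time vulnerable to aborts by concurrent txns.
    from collections import deque

    class JitScheduler:
        def __init__(self, target_backlog=4):
            self.target_backlog = target_backlog  # desired certifier queue length
            self.exec_times = deque(maxlen=100)   # recent execution durations (s)

        def record(self, exec_seconds):
            """Feed back the observed execution time of a finished transaction."""
            self.exec_times.append(exec_seconds)

        def delay_before_start(self, backlog):
            """Seconds to wait before starting a new transaction.

            If the certification backlog exceeds the target, delay by roughly
            one average execution time per excess queued transaction, so the
            transaction finishes 'just in time' for its certification slot."""
            if backlog <= self.target_backlog or not self.exec_times:
                return 0.0
            avg = sum(self.exec_times) / len(self.exec_times)
            return (backlog - self.target_backlog) * avg
    ```

    In this sketch a worker would sleep for delay_before_start(queue_length) before executing a transaction, then call record() with its duration on completion, closing the adaptive loop.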

    A big data platform for large scale event processing

    To date, big data applications have focused on the store-and-process paradigm. In this paper we describe an initiative to deal with big data applications for continuous streams of events. In many emerging applications, the volume of data being streamed is so large that the traditional 'store-then-process' paradigm is either not suitable or too inefficient. Moreover, soft real-time requirements might severely constrain the engineering solutions. Many scenarios fit this description. In network security for cloud data centres, for instance, very high volumes of IP packets and events from sensors at firewalls, network switches, routers, and servers need to be analyzed to detect attacks in minimal time, in order to limit the effect of the malicious activity on the IT infrastructure. Similarly, in the fraud department of a credit card company, payment requests should be processed online, as quickly as possible, in order to provide meaningful results in real time. An ideal system would detect fraud during the authorization process, which lasts hundreds of milliseconds, and deny the payment authorization, minimizing the damage to the user and the credit card company.
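
    As an illustration of the online requirement in the fraud scenario (the velocity rule, window length, and names below are invented, not from the paper), a sketch of a check that runs inside the authorization window rather than over stored logs:

    ```python
    # Minimal sketch: score each payment online, within the authorization
    # window, instead of batch-analyzing stored logs afterwards.
    import time
    from collections import defaultdict, deque

    WINDOW_S = 60.0            # seconds of history kept per card (toy value)
    MAX_TXN_PER_WINDOW = 5     # toy velocity rule

    history = defaultdict(deque)   # card_id -> timestamps of recent payments

    def authorize(card_id, now=None):
        """Return the online accept/deny decision for one payment event."""
        now = time.time() if now is None else now
        q = history[card_id]
        while q and now - q[0] > WINDOW_S:
            q.popleft()            # expire events outside the sliding window
        q.append(now)
        return len(q) <= MAX_TXN_PER_WINDOW
    ```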

    StreamCloud: An elastic and scalable data streaming system

    Many applications in domains such as telecommunications, network security, and large-scale sensor networks require online processing of continuous data flows. They produce very high loads that require aggregating the processing capacity of many nodes. Current stream processing engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under- or over-provisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.
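
    A minimal sketch of the key-hash routing that such query partitioning relies on (the two-subquery split and subcluster sizes below are invented; this is not StreamCloud's implementation):

    ```python
    # Minimal sketch: split a query into subqueries on independent node sets and
    # route tuples by hashing the key, so stateful operators keep key affinity.
    import hashlib

    def route(key, num_nodes):
        """Deterministically map a tuple key to one node of a subcluster."""
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        return h % num_nodes

    FILTER_NODES, AGG_NODES = 3, 2   # arbitrary subcluster sizes

    def dispatch(stream):
        """Yield (tuple, filter_node, agg_node) routing decisions.

        The stateless filter subquery could run on any node, but the stateful
        aggregation must see every tuple of a key on one node, hence the hash."""
        for key, value in stream:
            yield (key, value), route(key, FILTER_NODES), route(key, AGG_NODES)
    ```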

    Effect of the calcium phosphorus ratio on bone characteristics, percent of ashes and skeletal integrity of broilers

    The aim of this study was to determine the effect of the calcium (Ca) to available phosphorus (AP) ratio on bone morphometry and mineralization and on the skeletal integrity of broilers. Sixty one-day-old Cobb 500 chicks were distributed in 12 experimental units and assigned, for 21 days, to one of the following treatments: T1, low Ca:AP (1.13); T2, medium Ca:AP (1.55); T3, adequate Ca:AP (1.98). All diets contained 3072 kcal ME/kg and 20.50% crude protein. Feed, in meal form, and water were offered ad libitum. At 21 days of age all animals were slaughtered to extract the femurs, tibias, and metatarsi. The variables measured included bone morphometry (weight, length, width, volume, and density) and skeletal integrity (ability to walk and lesions in the femoral head), and the ash content of the tibia was determined. The data were analyzed under a completely randomized design using the ANOVA procedure and Duncan's test for the comparison of means. The results showed that the ash content of the tibia was significantly affected (p<0.01) by the dietary Ca:AP ratio. Compared with the 1.98 and 1.55 Ca:AP ratios, the 1.13 Ca:AP ratio affected the ability to walk (p<0.03) and the density of the tibia (p<0.05). In conclusion, the ash content of the tibia is sensitive to changes in the Ca:AP ratio, while skeletal integrity indicators are only affected when the Ca:AP ratio falls below 1.55.

    Nonreferral of Possible Soft Tissue Sarcomas in Adults: A Dangerous Omission in Policy

    Introduction. The aim of this study is to compare outcomes in three groups of soft tissue sarcoma (STS) patients treated in our specialist centre: patients referred immediately after an inadequate initial treatment, patients referred after a local recurrence, and patients referred directly, prior to any treatment. Patients and methods. We reviewed all our nonmetastatic extremity STS patients with a minimum follow-up of 2 years. We compared three patient groups: those referred directly to our centre (group A), those referred after an inadequate initial excision (group B), and those referred after a local recurrence (group C). Results. The study included 174 patients. Disease-free survival was 73%, 76%, and 28% in groups A, B, and C, respectively (P < .001). Depth, size, and histologic grade influenced the outcome in groups A and B, but not in group C. Conclusion. Initial wide surgical treatment is the main factor determining local control, being even more important than the known intrinsic prognostic factors of tumour size, depth, and histologic grade. The influence on outcome of initial wide local excision (WLE), which is made possible by referral to a specialist centre, is paramount.